Is ChatGPT A Silver Bullet For Cybercriminals?
Executive Vice President at Proofpoint, overseeing global cybersecurity strategy.
By now, you’ve heard of ChatGPT—or more likely, you’ve heard that it’s coming to take your job whether you’re a programmer, journalist, musician or almost anything else.
The OpenAI chatbot has been accessible to all for mere months, yet it has already amassed millions of users, impressing with its ability to write everything from code to essays and lyrics.
You may have also heard that ChatGPT is about to light a fire under an already bubbling-hot cyber threat landscape, helping scammers write engaging, convincing and grammatically correct phishing emails or perfect malware code in seconds.
While ChatGPT is certainly impressive, do we really need to worry about it upping the ante on already menacing threats? In short, the answer is no—but there’s little to celebrate about that.
ChatGPT’s AI Adds Little—For Now
ChatGPT is unlikely to bolster the arsenal of most cybercriminals—not because it lacks utility but because, in most cases, it adds very little.
For one, many social engineering emails aren't designed to be perfect (grammatically or otherwise). Instead, they are often intentionally written poorly to find people who are more likely to engage. What's more, headers, senders, attachments and URLs are all among the common threat indicators analyzed and flagged by robust detection technologies, and a well-written email will do little to change that.
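To make that point concrete, here is a minimal sketch of the kind of rule-based indicator scoring such detection stacks build on. The indicator list, weights and function name are assumptions for illustration only, not any vendor's actual detection logic; real systems combine far more signals, including reputation feeds and machine learning.

```python
import re

# Hypothetical list of top-level domains treated as higher risk
# for this example; real feeds are far larger and updated constantly.
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}

def score_email(sender: str, reply_to: str, urls: list[str]) -> int:
    """Return a naive risk score; higher means more suspicious."""
    score = 0
    # A Reply-To domain that differs from the sender domain is a
    # classic spoofing tell.
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 2
    for url in urls:
        # Links pointing at raw IP addresses are rarely legitimate.
        if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
            score += 3
        # Links ending in a higher-risk TLD add a small amount.
        if any(url.rstrip("/").endswith(tld) for tld in SUSPICIOUS_TLDS):
            score += 1
    return score
```

Note that nothing here inspects the body text at all, which is why a fluently written email, AI-generated or not, does little to evade this style of detection.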
Even in cases such as business email compromise (BEC), where there is more value to tone and accuracy, the email is just one part of the puzzle. Cybercriminals also need access to critical information such as payers, payees and other transaction details.
This information will likely be sourced from a compromised inbox, meaning threat actors can simply copy the style of an email sent from the victim’s account rather than using AI to craft one from scratch.
Even the most innovative forms of social engineering don’t lend themselves to the abuse of ChatGPT. Through research, we found a “marked increase in sophisticated, multi-touch phishing campaigns engaging in longer conversations across multiple personas.” While BEC actors will also play the long game, advanced persistent threat (APT) actors pioneered these types of attacks and haven’t needed to use AI to achieve their social engineering aims.
While cybercriminals or more advanced adversaries could use ChatGPT as a chatbot to conduct entire conversations with multiple victims en masse, they don’t usually need or want to. In most cases, APT campaigns reflect specific tasking and are well-planned, thoroughly researched, highly targeted and not suited to being launched at scale. Cybercriminals are awash in millions of compromised credentials and endpoints, and they’ve even built their own CRM-style systems to prioritize the manual escalation of privilege and lateral movement required to conduct a typical ransomware/data extortion attack.
Another Added Extra
It may not be having a revolutionary impact at this stage, but that’s not to say threat actors aren’t already putting ChatGPT to some use. For now, however, it’s just another easy-to-access toolkit at their disposal.
The world is not short on off-the-shelf malware, and code written for the same purpose by ChatGPT is no better than the current offering. ChatGPT may let attackers describe the code they want in plain language, but it doesn't make the resulting malware any better at what it needs to do: evading EDR tools and infecting machines.
It doesn’t save time or effort elsewhere, either. To launch a malware attack, threat actors still need to carry out a host of menial tasks—rotating their infrastructure, registering domains and recalibrating after threat researchers catch and condemn the attacks.
Nefarious operators need to sell their code as well as spread it, and ChatGPT does nothing to help automate the most important components of this process.
For these reasons and more, ChatGPT is unlikely to reshape the threat landscape until it exceeds the capabilities of currently available tools. Many of those tools have been honed over decades and do far more than even an admittedly advanced chatbot can.
Of course, this may well change. ChatGPT is already progressing quickly and capturing the attention of nation-states and cybersecurity professionals, so this is unlikely to be the end of the story.
Even so, whether cybercriminals use such tools now or in the future to enhance social engineering, phishing or malware attacks, the strategies needed to detect and deter these tactics remain largely the same.
How To Keep Threats At Bay—AI Or Otherwise
Whether AI-powered or not, cybercriminals are breaching our perimeters and compromising machines, networks and inboxes on a daily basis. Just as ChatGPT changes little about the attacks we face, it also changes little about how we defend against them.
The only answer is a robust and multilayered strategy, encompassing tools and technology as well as a focus on the human element.
Awareness is key. As our people are often the last line of cyber defense, they must be equipped for the task. That means ensuring everyone in your organization understands the potential tells of a phishing email and the importance of scrutinizing formatting, domains, addresses, URLs and attachments.
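One tell worth training for is the lookalike domain. As a rough illustration of the scrutiny described above, the sketch below flags domains that nearly, but not exactly, match a trusted one using Python's standard-library difflib; the trusted-domain list and similarity threshold are invented for this example:

```python
import difflib

# Hypothetical allow-list for the example; a real deployment would
# use the organization's own domains and partner domains.
TRUSTED_DOMAINS = ["example.com", "example-bank.com"]

def looks_like_spoof(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not equal, a trusted one."""
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: genuinely trusted
        ratio = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if ratio >= threshold:
            return True   # close but not identical: likely lookalike
    return False
```

For instance, `examp1e.com` (digit one in place of the letter l) would be flagged, while an unrelated domain would not. This is exactly the kind of check users are asked to perform by eye, which is why tooling and awareness training reinforce each other.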
The more they understand, the less likely they are to fall victim to such attacks (whoever the scribe), and the safer your organization will be as a result.